Generalized Newton Algorithms for Tilt-Stable Minimizers in Nonsmooth Optimization

Authors

Abstract

This paper aims at developing two versions of the generalized Newton method to compute local minimizers for nonsmooth problems of unconstrained and constrained optimization that satisfy an importan...


Similar articles

A Quasi-Newton Approach to Nonsmooth Convex Optimization

We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-reg...
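As a rough illustration of the basic ingredient such methods build on (a sketch, not the subLBFGS algorithm itself), here is plain subgradient descent in Python; the objective, subgradient selection, and step size below are illustrative choices:

```python
import numpy as np

def f(x):
    # Nonsmooth convex test objective: f(x) = |x[0]| + x[1]**2
    return abs(x[0]) + x[1] ** 2

def subgradient(x):
    # One element of the subdifferential of f at x; at the kink
    # x[0] = 0, np.sign returns 0, which is a valid subgradient.
    return np.array([np.sign(x[0]), 2.0 * x[1]])

def subgradient_descent(x, steps=200, t=0.05):
    # Fixed-step subgradient method: the simplest scheme that the
    # quasi-Newton machinery described above improves upon.
    for _ in range(steps):
        x = x - t * subgradient(x)
    return x

x_star = subgradient_descent(np.array([1.0, 1.0]))
print(f(x_star))  # small value near the minimum f(0, 0) = 0
```

With a fixed step the iterates only oscillate within a band of width t around the kink, which is precisely why curved models and line searches are worth generalizing to the nonsmooth setting.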


Higher-Order Minimizers and Generalized (F, ρ)-Convexity in Nonsmooth Vector Optimization over Cones

In this paper, we introduce the concept of a (weak) minimizer of order k for a nonsmooth vector optimization problem over cones. Generalized classes of higher-order cone-nonsmooth (F, ρ)-convex functions are introduced and sufficient optimality results are proved involving these classes. Also, a unified dual is associated with the considered primal problem, and weak and strong duality results ar...


Nonsmooth optimization via quasi-Newton methods

We investigate the behavior of quasi-Newton algorithms applied to minimize a nonsmooth function f, not necessarily convex. We introduce an inexact line search that generates a sequence of nested intervals containing a set of points of nonzero measure that satisfy the Armijo and Wolfe conditions if f is absolutely continuous along the line. Furthermore, the line search is guaranteed to terminat...
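A minimal sketch of such a bracketing line search (a simplification under assumed constants, not the authors' exact procedure): bisection on the Armijo and weak Wolfe conditions, tried on f(x) = |x|, where any accepted step must pass the kink:

```python
def weak_wolfe(f, g, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bisection search bracketing a step that satisfies the Armijo and
    weak Wolfe conditions; g returns a (sub)gradient of f."""
    lo, hi = 0.0, float("inf")
    t = 1.0
    f0, d0 = f(x), g(x) * d          # value and directional derivative at t = 0
    for _ in range(max_iter):
        if f(x + t * d) > f0 + c1 * t * d0:   # Armijo fails: step too long
            hi = t
        elif g(x + t * d) * d < c2 * d0:      # weak Wolfe fails: step too short
            lo = t
        else:
            return t
        t = (lo + hi) / 2.0 if hi < float("inf") else 2.0 * lo
    return t

# f(x) = |x|, starting at x = 2 with direction d = -1: the kink sits at t = 2
t = weak_wolfe(abs, lambda x: 1.0 if x >= 0 else -1.0, 2.0, -1.0)
print(t)  # 3.0: the accepted step lies past the kink at t = 2
```

Because the directional derivative never relaxes before the kink, the Wolfe test keeps growing the bracket until it straddles the nonsmooth point, mirroring the nested-interval idea in the abstract.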


An Approximate Quasi-Newton Bundle-Type Method for Nonsmooth Optimization

...rate of convergence under some additional assumptions, and it should be noted that we use only approximate values of the objective function and its subgradients, which makes the algorithm easier to implement. Some notations are listed below for presenting the algorithm. (i) ∂f(x) = {ξ ∈ Rn | f(z) ≥ f(x) + ξT(z − x), ∀z ∈ Rn}, the subdifferential of f at x, and each such...
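The defining inequality of the subdifferential can be checked numerically for a simple case, say f(x) = |x| at x = 0, where the subdifferential is the whole interval [−1, 1] (the particular ξ and test grid below are illustrative):

```python
import numpy as np

# f(x) = |x| is nonsmooth at 0; its subdifferential there is [-1, 1].
x, xi = 0.0, 0.3   # xi = 0.3 is one valid subgradient at x = 0

zs = np.linspace(-5.0, 5.0, 1001)
lower_bound = np.abs(x) + xi * (zs - x)   # affine minorant from the definition
print(bool(np.all(np.abs(zs) >= lower_bound)))  # True: f(z) >= f(x) + xi*(z - x) for all z
```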


Local structure and algorithms in nonsmooth optimization

Given a real δ, find stable real polynomials p and q such that the polynomial r(s) = (s² − 2δs + 1)p(s) + (s − 1)q(s) is also stable. (We call a polynomial p stable if its abscissa α(p) = max{Re s : p(s) = 0} is nonpositive.) Clearly the problem is unsolvable if δ = 1, since then r(1) = 0; more delicate results (summarized in [7]) show it remains unsolvable for δ < 1 close to 1. Blondel offered a...
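The obstruction at δ = 1 is easy to verify numerically, reading the first factor as s² − 2δs + 1 so that it vanishes at s = 1 when δ = 1; the particular p and q below are arbitrary illustrative choices:

```python
# r(s) = (s**2 - 2*delta*s + 1)*p(s) + (s - 1)*q(s); at delta = 1 the first
# factor is (s - 1)**2, so r(1) = 0 regardless of p and q -- r then has a
# root at s = 1 with positive real part, hence cannot be stable.
def r(s, delta, p=lambda s: s + 2.0, q=lambda s: s + 3.0):
    return (s ** 2 - 2.0 * delta * s + 1.0) * p(s) + (s - 1.0) * q(s)

print(r(1.0, 1.0))  # 0.0
```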



Journal

Journal title: SIAM Journal on Optimization

سال: 2021

ISSN: 1095-7189, 1052-6234

DOI: https://doi.org/10.1137/20m1329937